Complexity of increasing the secure connectivity in wireless ad hoc networks
We consider the problem of maximizing secure connectivity in wireless ad hoc networks, and analyze the complexity of the post-deployment key establishment process constrained by physical-layer properties such as connectivity, energy consumption, and interference. Two approaches, based on graph augmentation problems with nonlinear edge costs, are formulated. The first establishes a secret key using only the links that are already secured by shared keys. This problem is NP-hard and does not admit a polynomial-time approximation scheme (PTAS), since the minimum cutsets to be augmented do not admit constant costs. The second extends the first by increasing the power level between a pair of nodes that share a secret key so that they can connect physically. This problem can be formulated as an optimal key establishment problem with interference constraints and two objectives: (i) maximizing the concurrent key establishment flow, and (ii) minimizing the cost. We prove that both problems are NP-hard and MAX-SNP-hard via a reduction from the MAX3SAT problem.
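The first augmentation variant can be illustrated with a toy greedy heuristic (not the paper's algorithm; the abstract proves the problem NP-hard): given links already secured by shared keys and candidate key establishments with energy costs, repeatedly add the affordable candidate that best merges securely connected components per unit cost. The budget model and all names below are illustrative.

```python
def components(n, edges):
    # count connected components of an n-node graph with union-find
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n)})

def greedy_augment(n, secure_edges, candidates, budget):
    """Greedily add keyed links that best merge secure components per
    unit energy cost, until the budget is exhausted (toy heuristic)."""
    chosen, remaining, spent = list(secure_edges), dict(candidates), 0.0
    while remaining:
        base = components(n, chosen)
        best, best_gain = None, 0.0
        for e, cost in remaining.items():
            if spent + cost > budget:
                continue                      # not affordable
            gain = (base - components(n, chosen + [e])) / cost
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:                      # nothing affordable helps
            break
        chosen.append(best)
        spent += remaining.pop(best)
    return chosen, spent
```

Because the true objective has nonlinear edge costs, a greedy ratio rule like this gives no approximation guarantee; it only makes the problem setup concrete.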
SplitFed: When Federated Learning Meets Split Learning
Federated learning (FL) and split learning (SL) are two recent distributed
machine learning (ML) approaches that have gained attention due to their
inherent privacy-preserving capabilities. Both approaches follow a
model-to-data scenario, in that an ML model is sent to clients for network
training and testing. However, FL and SL show contrasting strengths and
weaknesses. For example, while FL performs faster than SL due to its parallel
client-side model generation strategy, SL provides better privacy than FL due
to the ML model architecture split between clients and the server. In contrast
to FL, SL enables ML training with clients having low computing resources as
the client trains only the first few layers of the split ML network model. In
this paper, we present a novel approach, named splitfed learning (SFL), that amalgamates the two approaches, eliminating their inherent drawbacks. SFL splits the network
architecture between the clients and server as in SL to provide a higher level
of privacy than FL. Moreover, it offers better efficiency than SL by
incorporating the parallel ML model update paradigm of FL. Our empirical
results, on uniformly distributed horizontally partitioned HAM10000 and MNIST
datasets with multiple clients, show that SFL provides communication efficiency and test accuracy similar to SL, while reducing its computation time per global epoch by four to six times relative to SL for both datasets.
Furthermore, as in SL, its communication efficiency over FL improves with the
number of clients. To further enhance privacy, we integrate a differentially private local model training mechanism into SFL and test its performance on AlexNet with the MNIST dataset under various privacy levels.
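A toy numpy sketch of one SplitFed round under strong simplifications (linear layers, full-batch MSE, synthetic data): each client computes the first layer up to the cut, the server completes the forward and backward pass, and the client-side weights are then federated-averaged. Sizes, learning rate, and data are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# One client-side layer per client, one shared server-side layer.
d_in, d_cut, d_out, lr = 4, 3, 1, 0.05
W_server = rng.normal(size=(d_cut, d_out))
clients = [{"W": rng.normal(size=(d_in, d_cut)),
            "X": rng.normal(size=(8, d_in)),
            "y": rng.normal(size=(8, d_out))} for _ in range(3)]

def sfl_round(clients, W_server):
    new_Ws, server_grads = [], []
    for c in clients:                        # clients proceed in parallel
        A = c["X"] @ c["W"]                  # client forward, up to the cut
        pred = A @ W_server                  # server completes the forward
        dpred = 2 * (pred - c["y"]) / len(c["y"])   # MSE gradient
        server_grads.append(A.T @ dpred)
        dA = dpred @ W_server.T              # gradient sent back over the cut
        new_Ws.append(c["W"] - lr * c["X"].T @ dA)
    W_avg = sum(new_Ws) / len(new_Ws)        # federated averaging (the "Fed")
    for c in clients:
        c["W"] = W_avg.copy()
    return clients, W_server - lr * sum(server_grads) / len(server_grads)

def mean_loss(clients, W_server):
    return float(np.mean([np.mean((c["X"] @ c["W"] @ W_server - c["y"]) ** 2)
                          for c in clients]))

l0 = mean_loss(clients, W_server)
for _ in range(50):
    clients, W_server = sfl_round(clients, W_server)
l1 = mean_loss(clients, W_server)
```

The server never sees raw client data, only the "smashed" activations at the cut, which is the privacy argument the abstract makes.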
Parameter-Saving Adversarial Training: Reinforcing Multi-Perturbation Robustness via Hypernetworks
Adversarial training serves as one of the most popular and effective methods
to defend against adversarial perturbations. However, most defense mechanisms consider only a single type of perturbation, while various attack methods might be adopted to mount stronger adversarial attacks against the deployed model in real-world scenarios. Defending against
various attacks can be a challenging problem since multi-perturbation
adversarial training and its variants only achieve suboptimal robustness
trade-offs, due to the theoretical limit to multi-perturbation robustness for a
single model. Besides, it is impractical to deploy large models in some
storage-efficient scenarios. To address these drawbacks, in this paper we
propose a novel multi-perturbation adversarial training framework,
parameter-saving adversarial training (PSAT), to reinforce multi-perturbation
robustness with an advantageous side effect of saving parameters, which
leverages hypernetworks to train specialized models against a single
perturbation and aggregate these specialized models to defend against multiple
perturbations. Eventually, we extensively evaluate and compare our proposed
method with state-of-the-art single/multi-perturbation robust methods against
various latest attack methods on different datasets, showing the robustness
superiority and parameter efficiency of our proposed method, e.g., on the CIFAR-10 dataset with ResNet-50 as the backbone, PSAT saves approximately 80\% of parameters while achieving state-of-the-art robustness trade-off accuracy.
Comment: 9 pages, 2 figures
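The parameter-saving idea can be sketched as a chunked hypernetwork: small per-perturbation codes are mapped by one shared generator to the weights of each specialized model, so storing the codes plus the generator costs far less than storing one full weight set per perturbation type. Sizes and names below are illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target layer: 64x64 = 4096 weights, generated chunk by chunk.
d_embed, chunk_size, n_chunks = 8, 64, 64
n_perturbations = 3                      # e.g. one specialized model per attack

H = rng.normal(scale=0.1, size=(d_embed, chunk_size))       # shared generator
Z = rng.normal(size=(n_perturbations, n_chunks, d_embed))   # per-type codes

def generate_weights(i):
    """Weights of the specialized 64x64 layer for perturbation type i."""
    return (Z[i] @ H).reshape(64, 64)

params_hyper = H.size + Z.size                    # generator + all codes
params_separate = n_perturbations * 64 * 64       # one full layer per type
saving = 1 - params_hyper / params_separate       # fraction of weights saved
```

With these toy sizes the hypernetwork stores 2,048 numbers against 12,288 for three independent layers; the actual 80\% figure in the abstract comes from the paper's full training setup, not this sketch.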
Quantum-Inspired Machine Learning: a Survey
Quantum-inspired Machine Learning (QiML) is a burgeoning field, receiving
global attention from researchers for its potential to leverage principles of
quantum mechanics within classical computational frameworks. However, current
review literature often presents a superficial exploration of QiML, focusing
instead on the broader Quantum Machine Learning (QML) field. In response to
this gap, this survey provides an integrated and comprehensive examination of
QiML, exploring QiML's diverse research domains including tensor network
simulations, dequantized algorithms, and others, showcasing recent
advancements, practical applications, and illuminating potential future
research avenues. Further, a concrete definition of QiML is established by
analyzing various prior interpretations of the term and their inherent
ambiguities. As QiML continues to evolve, we anticipate a wealth of future
developments drawing from quantum mechanics, quantum computing, and classical
machine learning, enriching the field further. This survey serves as a guide
for researchers and practitioners alike, providing a holistic understanding of
QiML's current landscape and future directions.
Comment: 56 pages, 13 figures, 8 tables
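As one concrete QiML ingredient named in the survey, tensor network simulation replaces a dense quantum state with a chain of small tensors. A minimal sketch, assuming illustrative sizes: a matrix product state (MPS) stores an n-qubit state in O(n) small cores rather than a 2**n vector, and contracting the chain recovers the dense vector.

```python
import numpy as np

rng = np.random.default_rng(2)

n, chi = 8, 4   # qubits and bond dimension (illustrative values)

# One small 3-index tensor per qubit; boundary bonds have dimension 1.
cores = [rng.normal(size=(1 if i == 0 else chi, 2, 1 if i == n - 1 else chi))
         for i in range(n)]

def mps_to_vector(cores):
    """Contract the MPS chain into the dense 2**n state vector."""
    v = cores[0]
    for c in cores[1:]:
        v = np.tensordot(v, c, axes=([-1], [0]))  # contract the shared bond
    return v.reshape(-1)

vec = mps_to_vector(cores)
mps_params = sum(c.size for c in cores)   # 208 numbers vs 256 for the dense vector
```

The saving is modest at n = 8 but grows exponentially with n for fixed bond dimension, which is what makes such classically simulable structures interesting for QiML.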
SplITS: Split Input-to-State Mapping for Effective Firmware Fuzzing
Ability to test firmware on embedded devices is critical to discovering
vulnerabilities prior to their adversarial exploitation. State-of-the-art
automated testing methods rehost firmware in emulators and attempt to
facilitate inputs from a diversity of methods (interrupt driven, status
polling) and a plethora of devices (such as modems and GPS units). Despite
recent progress to tackle peripheral input generation challenges in rehosting, a firmware's expectation of multi-byte magic values supplied from peripheral inputs for string operations still poses a significant roadblock. We solve the
impediment posed by multi-byte magic strings in monolithic firmware. We propose
feedback mechanisms for input-to-state mapping and retaining seeds for targeted
replacement mutations with an efficient method to solve multi-byte comparisons.
The feedback allows an efficient search over a combinatorial solution-space. We
evaluate our prototype implementation, SplITS, with a diverse set of 21
real-world monolithic firmware binaries used in prior works, and 3 new binaries
from popular open source projects. SplITS automatically solves 497% more
multi-byte magic strings guarding further execution to uncover new code and
bugs compared to the state-of-the-art. In 11 of the 12 real-world firmware binaries with string comparisons, including those extensively analyzed by prior works, SplITS outperformed the state-of-the-art by a statistically significant margin. We observed up to a 161% increase in blocks covered and discovered 6 new bugs that remained guarded by string comparisons. Notably, deep and difficult-to-reproduce bugs guarded by comparisons, identified in prior work, were found consistently. To
facilitate future research in the field, we release SplITS, the new firmware
data sets, and bug analysis at https://github.com/SplITS-Fuzzer
Comment: Accepted ESORICS 202
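A toy sketch of input-to-state replacement (the target function and all names are hypothetical, not SplITS's implementation): instrumentation logs the operands of a failing multi-byte comparison, and the fuzzer maps the observed operand back to its position in the seed and splices in the expected magic value, instead of searching the combinatorial byte space blindly.

```python
MAGIC = b"GPS-OK"
comparisons = []   # (observed, expected) operands logged by instrumentation

def firmware_parse(data):
    """Stand-in for rehosted firmware gated on a multi-byte magic string."""
    header = data[:len(MAGIC)]
    comparisons.append((header, MAGIC))   # instrumentation hook
    return header == MAGIC

def input_to_state_mutate(seed):
    """Replace each logged observed operand with its expected value."""
    for observed, expected in comparisons:
        idx = seed.find(observed)
        if idx != -1:   # operand maps back to a position in the input
            seed = seed[:idx] + expected + seed[idx + len(observed):]
    return seed

seed = b"AAAAAA-payload"
firmware_parse(seed)                 # fails, but logs the comparison
mutated = input_to_state_mutate(seed)
```

Real firmware complicates this mapping (operands may be transformed before comparison, or arrive byte-by-byte from a peripheral), which is why the paper adds feedback mechanisms and seed retention on top of the basic idea.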
Joint User and Data Detection in Grant-Free NOMA with Attention-based BiLSTM Network
We consider the multi-user detection (MUD) problem in uplink grant-free
non-orthogonal multiple access (NOMA), where the access point has to identify
the total number and correct identity of the active Internet of Things (IoT)
devices and decode their transmitted data. We assume that IoT devices use
complex spreading sequences and transmit information in a random-access manner
following the burst-sparsity model, where some IoT devices transmit their data
in multiple adjacent time slots with a high probability, while others transmit
only once during a frame. Exploiting the temporal correlation, we propose an
attention-based bidirectional long short-term memory (BiLSTM) network to solve
the MUD problem. The BiLSTM network creates a pattern of the device activation
history using forward and reverse pass LSTMs, whereas the attention mechanism
provides essential context to the device activation points. By doing so, a
hierarchical pathway is followed for detecting active devices in a grant-free
scenario. Then, by utilising the complex spreading sequences, blind data
detection for the estimated active devices is performed. The proposed framework
does not require prior knowledge of device sparsity levels and channels for
performing MUD. The results show that the proposed network achieves better performance than existing benchmark schemes.
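A simplified numpy sketch of the mechanism, not the paper's trained network: forward and backward recurrent passes summarize the per-slot device-activation history in both time directions, and dot-product attention re-weights the slots to build context for detection. Feature sizes, weights, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

T, d, dh = 6, 4, 5                # time slots per frame, feature and state sizes
x = rng.normal(size=(T, d))       # per-slot received features (toy data)

def recurrent_pass(x, W, reverse=False):
    """Simple tanh RNN pass; stands in for one LSTM direction."""
    h, out = np.zeros(W.shape[0]), []
    seq = x[::-1] if reverse else x
    for t in range(len(seq)):
        h = np.tanh(W @ np.concatenate([h, seq[t]]))
        out.append(h)
    return np.array(out[::-1] if reverse else out)   # forward time order

Wf = rng.normal(scale=0.3, size=(dh, dh + d))
Wb = rng.normal(scale=0.3, size=(dh, dh + d))
# bidirectional states: forward and backward summaries per slot
H = np.concatenate([recurrent_pass(x, Wf), recurrent_pass(x, Wb, True)], axis=1)

# dot-product attention: each slot attends over the whole frame
scores = H @ H.T / np.sqrt(H.shape[1])
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
context = attn @ H                # attention-weighted slot representations
```

In the paper the analogous context vectors feed a classifier over device identities, exploiting the burst-sparsity pattern in which an active device tends to stay active across adjacent slots.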